Extending Defensive Distillation
Authors
Abstract
Machine learning is vulnerable to adversarial examples: inputs carefully modified to force misclassification. Designing defenses against such inputs remains largely an open problem. In this work, we revisit defensive distillation—which is one of the mechanisms proposed to mitigate adversarial examples—to address its limitations. We view our results not only as an effective way of addressing some of the recently discovered attacks but also as reinforcing the importance of improved training techniques.
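Defensive distillation trains a teacher network with a temperature-softened softmax and then trains the student on the teacher's soft labels. A minimal sketch of the temperature softmax at the heart of the technique (the logits and temperature values are illustrative, not taken from the paper):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Softmax with temperature T: higher T yields a flatter distribution."""
    z = logits / T
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for a 3-class example.
logits = np.array([5.0, 1.0, 0.0])

hard = softmax_T(logits, T=1)    # near one-hot: most mass on class 0
soft = softmax_T(logits, T=20)   # flattened "soft labels"
```

In defensive distillation, the student is trained on `soft`-style labels produced at high temperature, which smooths the loss surface the attacker's gradients are computed on.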
Similar resources
On the Effectiveness of Defensive Distillation
We report experimental results indicating that defensive distillation successfully mitigates adversarial samples crafted using the fast gradient sign method [2], in addition to those crafted using the Jacobian-based iterative attack [5] on which the defense mechanism was originally evaluated.
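The fast gradient sign method [2] perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. A self-contained sketch on a toy logistic model (the weights, input, and epsilon below are hypothetical, chosen only to illustrate the attack):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(d loss / d x).

    For a logistic model p = sigmoid(w.x + b) with binary cross-entropy
    loss, the input gradient has the closed form (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy point correctly classified as class 1 (p ~ 0.82 before the attack).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.6)
p_adv = sigmoid(w @ x_adv + b)   # drops below 0.5: label flips
```

Each coordinate moves by exactly `eps`, so the perturbation is bounded in the L-infinity norm; this single-step attack is what the defense above is evaluated against.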
Defensive Distillation is Not Robust to Adversarial Examples
We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that well-trained DNNs can be easily misled by adversarial examples (AE) – inputs maliciously crafted by introducing small, imperceptible perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from...
Dynamic Behavior of Alternative Separation Processes for Ethanol Dehydration by Extractive Distillation
Ethanol is attracting the attention of researchers because of its potential in reducing the dependence on crude oil together with the possible reduction in the pollution associated with the combustion process. The ethanol dehydration process is significant in terms of its production cost. Recently, new distillation sequences have been proposed for the separation of pure ethanol from the ferment...
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide a theoretical justification for converting robustness analysis into a local Lipschitz cons...
Journal: CoRR
Volume: abs/1705.05264
Issue: –
Pages: –
Publication year: 2017